Computational Learning Theory Lecture 8: Expert Advice & Randomized Weighted Majority
Author
Abstract
For the past few lectures we have been discussing online classification in scenarios in which there is a perfect target function. In this lecture we study a more general learning framework, learning from expert advice, which removes the assumption of a perfect target and applies to a wide range of problems beyond classification. Suppose that you are interested in making a sequence of decisions based on the advice of a set of n “experts.” In this context, an expert could be a human being, such as a weather forecaster or a financial analyst. More generally, experts could represent simple decision rules based on features of a learning problem, or a collection of different learning algorithms. On each round, you must choose an expert to follow, and each expert suffers a loss for its prediction. Your loss is then the loss of the expert you chose. For example, in the case of binary classification, an expert might suffer a loss of 0 if its prediction is correct and 1 if it is incorrect. For convenience, we assume that losses are bounded in [0, 1], but it is easy to relax this assumption to any bounded range. We formalize this setting as follows; note that we abstract away the particular decisions being made and look only at the loss of each decision.
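As a concrete illustration, the Randomized Weighted Majority strategy named in the lecture title can be sketched in Python. This is a minimal sketch, not the lecture's exact pseudocode: the function name, the multiplicative penalty (1 - eta)**loss, and the parameter eta are illustrative assumptions.

```python
import random

def randomized_weighted_majority(n_experts, losses, eta=0.5):
    """Sketch of Randomized Weighted Majority (names and eta are assumptions).

    losses: a sequence of rounds; each round is a list of n_experts
            losses, each in [0, 1].
    eta:    penalty parameter in (0, 1].
    Returns the learner's total realized loss.
    """
    weights = [1.0] * n_experts
    total_loss = 0.0
    for round_losses in losses:
        # follow one expert, chosen with probability proportional to its weight
        i = random.choices(range(n_experts), weights=weights)[0]
        total_loss += round_losses[i]
        # multiplicatively penalize every expert according to its loss
        weights = [w * (1 - eta) ** l for w, l in zip(weights, round_losses)]
    return total_loss
```

Because weights shrink exponentially in cumulative loss, the learner's expected loss quickly concentrates on the best expert in hindsight.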
Similar resources
Cascading randomized weighted majority: A new online ensemble learning algorithm
With the increasing volume of data in the world, a natural approach for learning from such data is an online learning algorithm. Online ensemble methods are online algorithms that take advantage of an ensemble of classifiers to predict labels of data. Prediction with expert advice is a well-studied problem in the online ensemble learning literature. The Weighted Majority algorithm an...
COS 511: Theoretical Machine Learning: Review of Last Lecture; Randomized Weighted Majority Algorithm (RWMA)
Recall the online learning model we discussed in the previous lecture, with N = number of experts. For t = 1, 2, . . . , T rounds:
1) each expert i, 1 ≤ i ≤ N, makes a prediction ξ_i ∈ {0, 1}
2) the learner makes a prediction ŷ ∈ {0, 1}
3) the outcome y ∈ {0, 1} is observed (a mistake happens if ŷ ≠ y)
With this framework in hand, we investigated a particular algorithm, the Weighted Majority Algorithm (WMA), as follows: N ...
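The prediction and update steps of the protocol above can be sketched in Python. This is a minimal sketch under stated assumptions: the function names and the penalty parameter beta (the factor by which a mistaken expert's weight is multiplied) are illustrative, not taken from the cited notes.

```python
def wma_predict(weights, predictions):
    """Predict the label backed by the larger total weight (ties go to 0)."""
    w1 = sum(w for w, p in zip(weights, predictions) if p == 1)
    w0 = sum(weights) - w1
    return 1 if w1 > w0 else 0

def wma_update(weights, predictions, outcome, beta=0.5):
    """Multiply by beta the weight of every expert that predicted wrongly."""
    return [w * (beta if p != outcome else 1.0)
            for w, p in zip(weights, predictions)]
```

For example, with uniform weights and experts predicting [1, 1, 0], the weighted majority vote is 1; if the outcome is 0, the two mistaken experts have their weights halved, so the same predictions next round no longer carry the majority.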
Online Learning from Experts: Minimax Regret
In the last three lectures we have been discussing online learning algorithms in which we receive the instance x and then its label y for t = 1, ..., T. Specifically, in the last lecture we talked about online learning from experts and online prediction. We saw many algorithms, such as the Halving algorithm, the Weighted Majority (WM) algorithm, and lastly the Weighted Majority Continuous (WMC) algorithm. We a...
On-line Learning and the Metrical Task
The problem of combining expert advice, studied extensively in the Computational Learning Theory literature, and the Metrical Task System (MTS) problem, studied extensively in the area of On-line Algorithms, contain a number of interesting similarities. In this paper we explore the relationship between these problems and show how algorithms designed for each can be used to achieve good bounds a...
Aggregation Algorithm vs. Average For Time Series Prediction
Learning with expert advice as a scheme of on-line learning has been very successfully applied to various learning problems due to its strong theoretical basis. In this paper, for the purpose of time series prediction, we investigate the application of the Aggregation Algorithm, which is a generalisation of the famous Weighted Majority algorithm. The results of the experiments show that the Agg...
Publication date: 2014